
    Computing GCRDs of Approximate Differential Polynomials

    Differential (Ore) type polynomials with approximate polynomial coefficients are introduced. These provide a useful representation of approximate differential operators with a strong algebraic structure, which has been used successfully in the exact, symbolic, setting. We then present an algorithm for the approximate Greatest Common Right Divisor (GCRD) of two approximate differential polynomials, which intuitively is the differential operator whose solutions are those common to the two input operators. More formally, given approximate differential polynomials f and g, we show how to find "nearby" polynomials \widetilde{f} and \widetilde{g} which have a non-trivial GCRD. Here "nearby" is measured under a suitably defined norm. The algorithm is a generalization of the SVD-based method of Corless et al. (1995) for the approximate GCD of regular polynomials. We work on an appropriately "linearized" differential Sylvester matrix, to which we apply a block SVD. The algorithm has been implemented in Maple and a demonstration of its robustness is presented.
    Comment: To appear, Workshop on Symbolic-Numeric Computing (SNC'14), July 2014.
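
    The following is a minimal sketch, in Python, of the commutative detection step that this paper generalizes: per Corless et al. (1995), the numerical rank deficiency of the Sylvester matrix of f and g, read off its singular values, reveals the degree of their approximate GCD. The code and its function names are illustrative, not the paper's Maple implementation.

        # Sketch: SVD-based approximate GCD detection for ordinary
        # (commutative) polynomials, the method the paper generalizes.
        import numpy as np

        def sylvester(f, g):
            # Sylvester matrix of two polynomials given as coefficient
            # lists, highest-degree coefficient first.
            m, n = len(f) - 1, len(g) - 1
            S = np.zeros((m + n, m + n))
            for i in range(n):                  # n shifted copies of f
                S[i, i:i + m + 1] = f
            for i in range(m):                  # m shifted copies of g
                S[n + i, i:i + n + 1] = g
            return S

        # f = (x - 1)(x - 2) perturbed slightly, g = (x - 1)(x - 3):
        # exactly one tiny singular value flags a degree-1 approximate GCD.
        f = np.array([1.0, -3.0, 2.0]) + 1e-8 * np.random.randn(3)
        g = np.array([1.0, -4.0, 3.0])
        print(np.linalg.svd(sylvester(f, g), compute_uv=False))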

    Computing Approximate GCRDs of Differential Polynomials

    We generalize the approximate greatest common divisor problem to the non-commutative, approximate Greatest Common Right Divisor (GCRD) problem of differential polynomials. Algorithms for performing arithmetic on approximate differential polynomials are presented, along with certification results and the corresponding number of flops required. Under reasonable assumptions the approximate GCRD problem is well posed. In particular, we show that an approximate GCRD exists under these assumptions and provide counterexamples when they are not satisfied. We introduce algorithms for computing nearby differential polynomials with a GCRD; these differential polynomials are improved through a post-refinement Newton iteration. It is shown that the Newton iteration converges to a unique, optimal solution when the residual is sufficiently small. Furthermore, if our computed solution is not optimal, it is shown to be reasonably close to the optimal solution.
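
    As a rough illustration of the non-commutative arithmetic involved (hypothetical code, not the paper's implementation): writing a differential polynomial as a sum of terms a_i(x) D^i, multiplication is governed by the commutation rule D a = a D + a', which extends by the Leibniz rule to D^i b = sum_k C(i,k) b^(k) D^(i-k).

        # Sketch: multiplying differential polynomials L = sum_i a_i(x) D^i,
        # where D = d/dx and coefficients are stored as [a_0, ..., a_m].
        from math import comb
        import sympy as sp

        x = sp.symbols('x')

        def diffpoly_mul(A, B):
            # Leibniz rule: D^i * b = sum_k C(i, k) * b^{(k)} * D^{i-k}.
            out = [sp.Integer(0)] * (len(A) + len(B) - 1)
            for i, a in enumerate(A):
                for j, b in enumerate(B):
                    for k in range(i + 1):
                        out[i + j - k] += comb(i, k) * a * sp.diff(b, x, k)
            return [sp.expand(c) for c in out]

        # (D + x)(D - x) = D^2 - x^2 - 1: the order-zero term picks up
        # the derivative of -x from the commutation rule.
        print(diffpoly_mul([x, 1], [-x, 1]))    # [-x**2 - 1, 0, 1]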

    Matrix Polynomials and their Lower Rank Approximations

    This thesis is a wide-ranging work on computing a "lower-rank" approximation of a matrix polynomial using second-order non-linear optimization techniques. Two notions of rank are investigated. The first is the classical rank, the number of linearly independent rows or columns. The other is the McCoy rank, the lowest rank of a matrix polynomial when evaluated at a complex number. Together, these two notions of rank allow one to compute a nearby matrix polynomial where the structure of both the left and right kernels is prescribed, along with the structure of both the infinite and finite eigenvalues. The computational theory of the calculus of matrix-polynomial-valued functions is developed and used in optimization algorithms based on second-order approximations. Special functions studied with a detailed error analysis are the determinant and adjoint of matrix polynomials. The unstructured and structured variants of matrix polynomials are studied in a very general setting in the context of an equality-constrained optimization problem. The most general instances of these optimization problems are NP-hard to solve, even approximately, in a global setting. In most instances we are able to prove that solutions to our optimization problems exist (possibly at infinity) and discuss techniques, in conjunction with an implementation, for computing local minimizers. Most of the analysis of these problems is local and done through the Karush-Kuhn-Tucker optimality conditions for constrained optimization problems. We show that most formulations of the problems studied satisfy regularity conditions and admit Lagrange multipliers. Furthermore, we show that under some formulations the second-order sufficient condition holds for instances of interest. When Lagrange multipliers do not exist, we discuss why and, if it is reasonable to do so, how to regularize the problem. In several instances closed-form expressions for the derivatives of matrix-polynomial-valued functions are derived to assist in the analysis of the optimality conditions around a solution. From this analysis it is shown that variants of Newton's method will have a local rate of convergence that is quadratic, given a suitable initial guess, for many problems. The implementations are demonstrated on some examples from the literature, and several examples are cross-validated with different optimization formulations of the same mathematical problem. We conclude with a special application of the theory developed in this thesis: computing a nearby pair of differential polynomials with a non-trivial greatest common divisor, a non-commutative symbolic-numeric computation problem. We formulate this problem as finding a nearby structured matrix polynomial that is rank deficient in the classical sense.
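
    As a toy example of the two rank notions above (hypothetical helper code): for A(x) = diag(1, x), the rank at a generic evaluation point is 2, but the rank drops to 1 at x = 0, so the classical rank is 2 while the McCoy rank is 1.

        # Sketch: classical rank vs. McCoy rank of a matrix polynomial
        # A(x) = sum_k C_k x^k, stored as a list of coefficient matrices.
        import numpy as np

        def eval_matpoly(coeffs, x):
            # Evaluate A(x) at a scalar point x.
            return sum(C * x**k for k, C in enumerate(coeffs))

        A = [np.array([[1.0, 0.0], [0.0, 0.0]]),   # constant coefficient
             np.array([[0.0, 0.0], [0.0, 1.0]])]   # coefficient of x
        print(np.linalg.matrix_rank(eval_matpoly(A, 1.7)))  # 2: generic (classical) rank
        print(np.linalg.matrix_rank(eval_matpoly(A, 0.0)))  # 1: McCoy rank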

    Computing lower rank approximations of matrix polynomials

    The final publication is available at Elsevier via https://doi.org/10.1016/j.jsc.2019.07.012. © 2019. This manuscript version is made available under the CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0/
    Given an input matrix polynomial whose coefficients are floating point numbers, we consider the problem of finding the nearest matrix polynomial which has rank at most a specified value. This generalizes the problem, given in a previous paper by the authors, of finding a nearest matrix polynomial that is algebraically singular with a prescribed lower bound on the dimension of its kernel. In this paper we prove that such lower-rank matrices at minimal distance always exist, satisfy regularity conditions, and are all isolated and surrounded by a basin of attraction of non-minimal solutions. In addition, we present an iterative algorithm which, given input sufficiently close to a matrix polynomial of rank at most the specified value, produces that matrix polynomial. The algorithm is efficient and is proven to converge quadratically given a sufficiently good starting point. An implementation demonstrates the effectiveness and numerical robustness of the algorithm in practice.
    Funding: Natural Sciences and Engineering Research Council of Canada.
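
    For contrast with the structured problem solved in this paper, the unstructured constant-matrix case has a classical closed-form answer: by the Eckart-Young theorem, truncating the SVD yields the nearest matrix of rank at most r in the Frobenius norm. Below is a minimal sketch of that baseline, not the paper's algorithm, which must additionally keep the perturbation a matrix polynomial of bounded degree.

        # Sketch: nearest rank-at-most-r matrix (unstructured case) via
        # truncated SVD; the structured matrix-polynomial version has no
        # such closed form and motivates the iterative algorithm above.
        import numpy as np

        def nearest_rank_at_most(M, r):
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            s[r:] = 0.0                 # zero out the trailing singular values
            return (U * s) @ Vt

        M = np.random.randn(4, 4)
        M2 = nearest_rank_at_most(M, 2)
        print(np.linalg.matrix_rank(M2, tol=1e-10))  # 2
        print(np.linalg.norm(M - M2))                # distance achieved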